fix: block.timestamp is not accurate #3398

Open
wants to merge 5 commits into base: main

Conversation

@thomas-nguy (Member) commented Dec 18, 2024:

What ❔

Related to zkSync-Community-Hub/zksync-developers#820

Change the L2 block creation logic to start a new L2 block only when a transaction is ready to be executed.

Why ❔

The current logic starts a new L2 block as soon as the previous one is sealed.

A contract that relies on block.timestamp would not be able to read the time accurately: if the L2 block goes stale (no transactions), it stays open indefinitely and its timestamp no longer reflects the actual time.

The solution has been tested locally, but any feedback would be appreciated.
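
For illustration, a compilable toy version of the proposed flow might look like the sketch below. Apart from `should_create_l2_block` (which mirrors the new field in the diff), all names are hypothetical stand-ins, not the actual state keeper API:

```rust
use std::time::{SystemTime, UNIX_EPOCH};

struct Tx;

struct StateKeeper {
    should_create_l2_block: bool, // mirrors the new field in the diff
}

fn now() -> u64 {
    SystemTime::now().duration_since(UNIX_EPOCH).unwrap().as_secs()
}

impl StateKeeper {
    fn start_l2_block(&self, ts: u64) { println!("open L2 block at {ts}"); }
    fn execute(&self, _tx: Tx) { println!("execute tx"); }
    fn seal_criteria_met(&self) -> bool { true } // stub: seal after each tx
    fn seal_l2_block(&self) { println!("seal L2 block"); }

    fn run(&mut self, txs: impl IntoIterator<Item = Tx>) {
        for tx in txs {
            if self.should_create_l2_block {
                // Open the block only once a tx is ready to execute, so
                // block.timestamp matches when execution actually starts.
                self.start_l2_block(now());
                self.should_create_l2_block = false;
            }
            self.execute(tx);
            if self.seal_criteria_met() {
                self.seal_l2_block();
                // Defer opening the next block until the next tx arrives;
                // a stale block never sits open with an old timestamp.
                self.should_create_l2_block = true;
            }
        }
    }
}

fn main() {
    StateKeeper { should_create_l2_block: true }.run([Tx, Tx]);
}
```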

Checklist

  • PR title corresponds to the body of PR (we generate changelog entries from PRs).
  • Tests for the changes have been added / updated.
  • Documentation comments have been added / updated.
  • Code has been formatted via zkstack dev fmt and zkstack dev lint.

@@ -89,6 +90,7 @@ impl ZkSyncStateKeeper {
            sealer,
            storage_factory,
            health_updater: ReactiveHealthCheck::new("state_keeper").1,
            should_create_l2_block: false,
@thomas-nguy (Member, Author) commented Dec 18, 2024:
Not sure if we should persist this in RocksDB to prevent issues at restart?

@thomas-nguy thomas-nguy changed the title Fix: block.timestamp is not accurate fix: block.timestamp is not accurate Dec 18, 2024
@popzxc popzxc requested a review from slowli January 9, 2025 08:07
@popzxc (Member) commented Jan 9, 2025:

@slowli PTAL

@slowli (Contributor) left a comment:

I think it could be worth exploring a slightly different approach: decoupling the start of a new block in the UpdatesManager from its start in the BatchExecutor. Namely, at the point where a new block is added in the current workflow, it would still be added in UpdatesManager, but would not be sent to BatchExecutor. Instead, it would only be sent to BatchExecutor after the first transaction of the block is received, with the updated timestamp (obviously, the timestamp needs to be updated in UpdatesManager as well). IMO, this would make it slightly easier to reason about correctness.
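
For concreteness, here is a hypothetical, much-simplified sketch of that decoupling; `UpdatesManager` and `BatchExecutor` below stand in for the real components, and the method names are illustrative only:

```rust
// Hypothetical sketch of the suggested decoupling; these are simplified
// stand-ins for the real components, not the actual API.
struct BatchExecutor;

impl BatchExecutor {
    fn start_next_l2_block(&mut self, ts: u64) {
        println!("executor: L2 block started at timestamp {ts}");
    }
}

struct UpdatesManager {
    block_timestamp: u64,
    executor_notified: bool,
}

impl UpdatesManager {
    // Called right after the previous block is sealed: the new block
    // exists logically, but the executor is not told about it yet.
    fn push_new_l2_block(&mut self, provisional_ts: u64) {
        self.block_timestamp = provisional_ts;
        self.executor_notified = false;
    }

    // Called when the first transaction of the block arrives: refresh the
    // timestamp here as well, then let the executor start the block.
    fn on_first_tx(&mut self, executor: &mut BatchExecutor, ts: u64) {
        self.block_timestamp = ts;
        executor.start_next_l2_block(self.block_timestamp);
        self.executor_notified = true;
    }
}
```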

tracing::debug!(
    "L2 block #{} (L1 batch #{}) should be sealed as per sealing rules",
    updates_manager.l2_block.number,
    updates_manager.l1_batch.number
);
self.seal_l2_block(updates_manager).await?;
self.should_create_l2_block = true;
Contributor commented:

I don't quite understand the purpose of this variable. AFAIU, the logic here conceptually should change as follows:

  • After sealing the block, do not start the next block immediately; instead, set a local variable indicating whether to start it.
  • Wait for the next transaction.
  • After receiving a transaction, if the flag is set, start a new block and unset the flag.

The logic here almost follows this flow, but the flag is non-local, which complicates reasoning.
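
A compilable toy version of that flow, with the flag kept local to the loop (all names are illustrative), might look like:

```rust
// Toy sketch of the flow above; the flag is a local variable rather than
// a field on the state keeper, so its lifetime is easy to reason about.
struct Tx;

fn execute(_tx: Tx) { println!("execute tx"); }
fn should_seal() -> bool { true } // stub: seal after every tx
fn seal_l2_block() { println!("seal L2 block"); }
fn start_l2_block() { println!("start L2 block with fresh timestamp"); }

fn run(txs: impl IntoIterator<Item = Tx>) {
    let mut start_new_block = false; // assumes a block is already open
    for tx in txs {
        if start_new_block {
            start_l2_block(); // started only once the next tx has arrived
            start_new_block = false;
        }
        execute(tx);
        if should_seal() {
            seal_l2_block();
            start_new_block = true; // defer starting the next block
        }
    }
}
```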

@thomas-nguy (Member, Author) commented Jan 10, 2025:

You are right, this flag could totally be local!

The reason I have it global is that I need to know the state of the last fictive block when closing the batch in the parent loop (parent function).

  • Case 1: The last block has not been sealed. This is the original behavior before the PR change, because we always create a new unsealed block right after sealing one, whether we receive a new transaction or not. This is why we "always" seal the last block in the parent loop before closing the batch.

  • Case 2: The last block has been sealed, but no transaction has been received for some period of time and ultimately we are "forced" to close the batch. In that case we are in a weird state where the last block has been sealed but no new block has started, and we should not seal the last block again in the parent loop.

It seems a bit hacky indeed, but it was the best way I found to avoid introducing too many changes in this PR.

Perhaps I can completely remove the sealing logic in the parent loop so that we won't need a global flag and can turn this into a local one? That would be much easier to understand, and yes, the flow you are describing is exactly what the PR is trying to do.
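
For illustration, the batch-closing branch this enables might look roughly like the following sketch; only `should_create_l2_block` comes from the diff, everything else is a stand-in:

```rust
// Illustrative sketch of the batch-closing branch in the parent loop.
struct StateKeeper {
    should_create_l2_block: bool,
}

impl StateKeeper {
    fn seal_l2_block(&self) { println!("seal last (fictive) block"); }

    fn close_batch(&self) {
        if !self.should_create_l2_block {
            // Case 1: an unsealed block is still open (the pre-PR
            // behavior); seal it before closing the batch.
            self.seal_l2_block();
        }
        // Case 2 (flag set): the last block was already sealed and no new
        // block was started, so there is nothing to seal here.
        println!("close L1 batch");
    }
}
```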

Comment on lines +192 to +195
// If the flag is set, the last L2 block has already been sealed.
if !self.should_create_l2_block {
    self.seal_l2_block(&updates_manager).await?;
}
Contributor commented:

This place is confusing with the proposed changes. The check above checks whether the latest block contains any transactions. AFAIU, if should_create_l2_block is true, then the check doesn't concern the latest block, but rather the previous one; the latest block isn't really started yet. So, a fictive block must be started in any case. IIUC, the current approach technically works because the previous block exists and is non-empty, but it looks hacky.
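
A standalone sketch of that alternative (hypothetical names) could look like:

```rust
// Sketch of the alternative: if the previous block was sealed without a
// successor, open the fictive block first, then always seal.
struct StateKeeper {
    should_create_l2_block: bool,
}

impl StateKeeper {
    fn start_next_l2_block(&mut self) { println!("start fictive block"); }
    fn seal_l2_block(&self) { println!("seal last block"); }

    fn finish_batch(&mut self) {
        if self.should_create_l2_block {
            // The previous block was sealed and no new block was started:
            // open the fictive block now instead of skipping the seal.
            self.start_next_l2_block();
            self.should_create_l2_block = false;
        }
        // Seal unconditionally: a (possibly fictive) block always exists.
        self.seal_l2_block();
    }
}
```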

@thomas-nguy (Member, Author) commented:

see comment below.
